
    Tight Global Linear Convergence Rate Bounds for Douglas-Rachford Splitting

    Recently, several authors have shown local and global convergence rate results for Douglas-Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly at that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas-Rachford operator is contractive.
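
    As a concrete reference point, the following minimal Python sketch implements the relaxed Douglas-Rachford iteration for two proximable convex functions; the toy problem, step size gamma, and relaxation lam are illustrative and not tied to the paper's operator assumptions.

        import numpy as np

        def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, lam=0.5, n_iter=200):
            # Relaxed DR: z+ = (1 - lam) * z + lam * R_g(R_f(z)), where
            # R_h(z) = 2 * prox_{gamma*h}(z) - z is the reflected resolvent.
            z = z0.copy()
            for _ in range(n_iter):
                x = prox_f(z, gamma)           # resolvent of the first operator
                y = prox_g(2 * x - z, gamma)   # resolvent at the reflected point
                z = z + 2 * lam * (y - x)      # equivalent relaxed fixed-point update
            return prox_f(z, gamma)            # candidate solution

        # Toy problem: minimize (x - 1)^2 / 2 + (x - 3)^2 / 2, minimizer x = 2.
        prox_f = lambda z, g: (z + g * 1.0) / (1 + g)
        prox_g = lambda z, g: (z + g * 3.0) / (1 + g)
        print(douglas_rachford(prox_f, prox_g, np.zeros(1)))   # approx [2.]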

    Local Convergence of Proximal Splitting Methods for Rank Constrained Problems

    We analyze the local convergence of proximal splitting algorithms for optimization problems that are convex except for a rank constraint. To this end, we establish conditions under which the proximal operator of a function involving the rank constraint locally coincides with the proximal operator of its convex envelope, which implies local convergence. The conditions imply that the non-convex algorithms locally converge to a solution whenever a convex relaxation involving the convex envelope can be expected to solve the non-convex problem. Comment: To be presented at the 56th IEEE Conference on Decision and Control, Melbourne, December 2017.
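
    For intuition, the proximal operator of the indicator function of a rank constraint is the classical Eckart-Young projection, computed by truncating an SVD; the sketch below shows only this projection, not the paper's convex-envelope conditions.

        import numpy as np

        def project_rank(Z, r):
            # Projection of Z onto {X : rank(X) <= r}: keep the r largest
            # singular values (Eckart-Young). This is the proximal operator of
            # the indicator of the rank constraint (set-valued at ties).
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            s[r:] = 0.0
            return (U * s) @ Vt

        Z = np.random.default_rng(0).standard_normal((5, 4))
        print(np.linalg.matrix_rank(project_rank(Z, 2)))   # 2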

    Low-Rank Inducing Norms with Optimality Interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is non-convex, these problems are often approximately solved via convex relaxations, and nuclear norm regularization is the prevailing convexification technique for this type of problem. This paper introduces a family of low-rank inducing norms and regularizers that includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank-constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms. For two of the problems, there exist low-rank inducing norms that succeed in recovering the partially unknown matrix while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semidefinite programs. Moreover, they have cheaply computable proximal mappings, which makes it possible to solve large problem instances using first-order methods.
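
    As a concrete special case, the proximal mapping of the nuclear norm is singular-value soft-thresholding; the other members of the low-rank inducing family have their own (still cheap) proximal mappings, which this sketch does not cover.

        import numpy as np

        def prox_nuclear(Z, t):
            # Prox of t * nuclear norm: soft-threshold the singular values by t.
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            return (U * np.maximum(s - t, 0.0)) @ Vt

        # One proximal-gradient step for matrix completion on an observed mask M:
        #   X <- prox_nuclear(X - step * M * (X - Y), step * reg)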

    SVAG: Stochastic Variance Adjusted Gradient Descent and Biased Stochastic Gradients

    We examine biased gradient updates in variance-reduced stochastic gradient methods. For this purpose we introduce SVAG, a SAG/SAGA-like method with adjustable bias. SVAG is analyzed under smoothness assumptions, and we provide step-size conditions for convergence that match or improve on previously known conditions for SAG and SAGA. The analysis highlights a difference in step-size requirements between applying SVAG to cocoercive operators and applying it to gradients of smooth functions, a difference not present in ordinary gradient descent; this difference is verified with numerical experiments. A variant of SVAG that adaptively selects the bias is presented and compared numerically to SVAG on a set of classification problems. The adaptive SVAG frequently performs among the best and always improves on the worst-case performance of the non-adaptive variant.
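
    One way to picture a SAG/SAGA-like update with adjustable bias is to scale the stored-gradient correction by a parameter theta; in the sketch below, theta = n gives a SAGA-like (unbiased) step and theta = 1 a SAG-like (biased) one. The parameterization and all names are illustrative, not necessarily the paper's.

        import numpy as np

        def svag(grads, x0, n, step, theta, n_iter=3000, seed=0):
            # Direction: (theta / n) * (g_i(x) - y_i) + mean(y), then y_i <- g_i(x).
            rng = np.random.default_rng(seed)
            x = x0.copy()
            y = np.array([g(x) for g in grads])    # table of stored gradients
            y_mean = y.mean(axis=0)
            for _ in range(n_iter):
                i = rng.integers(n)
                gi = grads[i](x)
                x = x - step * ((theta / n) * (gi - y[i]) + y_mean)
                y_mean = y_mean + (gi - y[i]) / n  # keep the running average in sync
                y[i] = gi
            return x

        # Toy least squares: f_i(x) = (a_i * x - b_i)^2 / 2, minimizer x = 1.
        a, b = np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])
        grads = [lambda x, i=i: a[i] * (a[i] * x - b[i]) for i in range(3)]
        print(svag(grads, np.zeros(1), n=3, step=0.02, theta=3.0))   # approx [1.]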

    Optimal Convergence Rates for Generalized Alternating Projections

    Generalized alternating projections is an algorithm that alternates relaxed projections onto a finite number of sets to find a point in their intersection. We consider the special case of two linear subspaces, for which the algorithm reduces to a matrix iteration. For convergent matrix iterations, the asymptotic rate is linear and determined by the magnitude of the subdominant eigenvalue. In this paper, we show how to select the three algorithm parameters to optimize this magnitude, and hence the asymptotic convergence rate. The obtained rate depends on the Friedrichs angle between the subspaces and is considerably better than known rates for other methods such as alternating projections and Douglas-Rachford splitting. We also present an adaptive scheme that estimates the Friedrichs angle online and updates the algorithm parameters based on this estimate. A numerical example is provided that supports our theoretical claims and shows very good performance for the adaptive method. Comment: 20 pages, extended version of an article submitted to CDC.
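
    In the two-subspace setting each relaxed projection is affine, so one iteration with the three parameters can be written as below; the parameter names and the toy example are illustrative, not the paper's optimal tuning.

        import numpy as np

        def gap_step(x, P1, P2, alpha, a1, a2):
            # One generalized-alternating-projections step: two relaxed
            # projections followed by an outer relaxation.
            # alpha = a1 = a2 = 1 recovers plain alternating projections P2 @ P1.
            y = (1 - a1) * x + a1 * (P1 @ x)
            z = (1 - a2) * y + a2 * (P2 @ y)
            return (1 - alpha) * x + alpha * z

        # Two lines in R^2; the Friedrichs angle is the angle between them.
        u1 = np.array([1.0, 0.0]); u2 = np.array([1.0, 1.0]) / np.sqrt(2)
        P1, P2 = np.outer(u1, u1), np.outer(u2, u2)
        x = np.array([1.0, 2.0])
        for _ in range(50):
            x = gap_step(x, P1, P2, alpha=1.0, a1=1.0, a2=1.0)
        print(x)   # converges to the intersection, here {0}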

    On feasibility, stability and performance in distributed model predictive control

    In distributed model predictive control (DMPC), where a centralized optimization problem is solved in distributed fashion using dual decomposition, it is important to keep the number of iterations in the solution algorithm, i.e., the amount of communication between subsystems, as small as possible. At the same time, the number of iterations must be large enough to yield a feasible solution to the optimization problem and to guarantee stability of the closed-loop system. In this paper, a stopping condition for the distributed optimization algorithm that guarantees these properties is presented. The stopping condition rests on two theoretical contributions. First, since the optimization problem is solved using dual decomposition, standard techniques for proving stability in model predictive control (MPC), i.e., with a terminal cost and a terminal constraint set involving all state variables, do not apply. For the case without a terminal cost or a terminal constraint set, we present a new method to quantify the control horizon needed to ensure stability and a prespecified performance. Second, the stopping condition is based on a novel adaptive constraint-tightening approach, through which we guarantee that a primal feasible solution to the optimization problem is found and that closed-loop stability and performance are obtained. Numerical examples show that the number of iterations needed to guarantee feasibility of the optimization problem, stability, and a prespecified performance of the closed-loop system can be reduced significantly using the proposed stopping condition.
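
    The role of such a stopping condition can be pictured with generic dual gradient ascent that halts once the coupling residual, i.e., the primal infeasibility, is small enough; this is scaffolding under simplifying assumptions, not the paper's adaptive constraint-tightening test.

        import numpy as np

        def dual_decomposition(local_argmin, residual, lam0, step, tol, max_iter=1000):
            # Dual (sub)gradient ascent: subsystems minimize their local
            # Lagrangians for the current prices lam, prices are updated with
            # the coupling residual, and iteration stops once the residual is
            # below tol -- the role played by the paper's stopping condition.
            lam = lam0
            for k in range(max_iter):
                x = local_argmin(lam)           # solvable in parallel per subsystem
                r = residual(x)                 # violation of the coupling constraint
                if np.linalg.norm(r) <= tol:    # feasible enough: stop communicating
                    return x, lam, k
                lam = lam + step * r            # ascent on the dual variable
            return x, lam, max_iter

        # Toy consensus problem: min (x1 - 1)^2 + (x2 - 3)^2 s.t. x1 = x2.
        local_argmin = lambda lam: np.array([1.0 - lam / 2, 3.0 + lam / 2])
        residual = lambda x: x[0] - x[1]
        print(dual_decomposition(local_argmin, residual, 0.0, 0.5, 1e-6))  # x ~ [2, 2]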

    Modeling and Control of a 1.45 m deformable mirror

    The eagerness among astronomers to gain deeper knowledge and further understanding of the universe around us sets requirements on future telescopes: images of more distant stars at higher spatial resolution are desired. Extremely Large Telescopes (ELTs) are being developed for this purpose; one example is a European collaboration developing an ELT called the Euro50. To obtain the desired resolution, disturbances that affect the incoming light need to be compensated for. This compensation is achieved by constantly reshaping the secondary mirror of the telescope, which is done by force actuators that need to be controlled at high bandwidth. The purpose of this Master's thesis is to derive a control law for a 1.45 m deformable mirror, with a control strategy that is applicable and implementable in the secondary mirror of the Euro50. The proposed control strategy consists of actuator state feedback controllers of PD type together with local observers that estimate the states required for the feedback. The thesis was carried out in collaboration between Lund Observatory at Lund University and the Department of Automatic Control at Lund Institute of Technology.
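
    Per actuator, the proposed structure amounts to a local observer feeding a state feedback of PD type; the following minimal discrete-time sketch uses a placeholder double-integrator model and gains, not the thesis' mirror model.

        import numpy as np

        def observer_pd_step(A, B, C, L, K, xhat, y):
            # One step of a Luenberger observer feeding state feedback
            # u = -K @ xhat. With (position, velocity) states per actuator,
            # this reduces to a PD controller acting on estimated states.
            u = -K @ xhat                            # PD-type feedback on estimates
            innov = y - C @ xhat                     # measurement innovation
            xhat_next = A @ xhat + B * u + L * innov # predict and correct
            return u, xhat_next

        # Placeholder double-integrator actuator model, dt = 1 ms.
        dt = 1e-3
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        C = np.array([1.0, 0.0])                     # measure position only
        K = np.array([2000.0, 80.0])                 # illustrative P and D gains
        L = np.array([0.4, 20.0])                    # illustrative observer gains
        u, xhat = observer_pd_step(A, B, C, L, K, np.array([1e-6, 0.0]), 1e-6)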